Combining Adversaries with Anti-adversaries in Training

Authors

Abstract

Adversarial training is an effective learning technique to improve the robustness of deep neural networks. In this study, the influence of adversarial training on deep learning models is theoretically investigated, in terms of fairness, robustness, and generalization, under a more general perturbation scope in which different samples can have perturbations in different directions (the adversarial and anti-adversarial directions) and with varied bounds. Our theoretical explorations suggest that the combination of adversaries and anti-adversaries (samples with anti-adversarial perturbations) in training can achieve better fairness between classes and a better tradeoff between robustness and generalization in some typical learning scenarios (e.g., noisy-label learning and imbalance learning) compared with standard adversarial training. On the basis of these findings, a more general learning objective that combines adversaries and anti-adversaries with varied bounds for each sample is presented. Meta learning is utilized to optimize the combination weights. Experiments on benchmark datasets under different learning scenarios verify the theoretical findings and the effectiveness of the proposed methodology.
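The core contrast the abstract draws can be illustrated with a minimal FGSM-style sketch: an adversarial perturbation steps in the direction that increases the loss, while an anti-adversarial perturbation steps in the opposite direction. The logistic model, weights, and `perturb` helper below are illustrative assumptions, not the paper's actual training objective or meta-learned weighting.

```python
import numpy as np

def loss_and_grad(x, y, w):
    """Logistic loss of a linear model and its gradient w.r.t. the input x."""
    z = np.dot(w, x)
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -np.log(p) if y == 1 else -np.log(1.0 - p)
    grad = (p - y) * w  # d(loss)/dx for the logistic model
    return loss, grad

def perturb(x, y, w, eps, anti=False):
    """One signed-gradient step: ascend the loss (adversary) or descend it (anti-adversary)."""
    _, g = loss_and_grad(x, y, w)
    direction = -np.sign(g) if anti else np.sign(g)
    return x + eps * direction

w = np.array([1.0, -2.0, 0.5])   # fixed toy weights (assumed for illustration)
x = np.array([0.3, 0.1, -0.2])
y = 1

clean, _ = loss_and_grad(x, y, w)
adv_loss, _ = loss_and_grad(perturb(x, y, w, eps=0.1), y, w)
anti_loss, _ = loss_and_grad(perturb(x, y, w, eps=0.1, anti=True), y, w)

# The adversarial sample raises the loss; the anti-adversarial sample lowers it.
assert anti_loss < clean < adv_loss
```

In the paper's setting, each training sample would receive its own perturbation direction and bound, with the combination weights optimized by meta learning rather than fixed as here.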


Related Articles

Synthetic Adversaries for Urban Combat Training

AI Magazine. This article describes requirements for synthetic adversaries for urban combat training and a prototype application, MOUTBots. MOUTBots use a commercial computer game to define, implement, and test basic behavior representation requirements and the Soar architecture as the engine for knowledge representation and execution. The article describes how these components aided the de...


On Dealing with Adversaries Fairly

Peer-to-peer systems are often vulnerable to disruption by minorities. There are several strategies for dealing with this problem, but ultimately many of them come down to some kind of voting or collaborative filtering mechanism. Yet there exists a large literature on voting theory, also known as social choice theory. In this note we outline some of its key results and try to apply them to a nu...


Billion-Gate Secure Computation with Malicious Adversaries

The goal of this paper is to assess the feasibility of two-party secure computation in the presence of a malicious adversary. Prior work has shown the feasibility of billion-gate circuits in the semi-honest model, but only the 35k-gate AES circuit in the malicious model, in part because security in the malicious model is much harder to achieve. We show that by incorporating the best known techn...


Private Web Search with Malicious Adversaries

Web search has become an integral part of our lives and we use it daily for business and pleasure. Unfortunately, however, we unwittingly reveal a huge amount of private information about ourselves when we search the web. A look at a user’s search terms over a period of a few months paints a frighteningly clear and detailed picture about the user’s life. In this paper, we build on previous work...


Log-loss games with bounded adversaries

Worst-case analysis of the game assumes, by definition, that the adversary is trying to minimize the learner’s regret without any restrictions on the resources it uses while doing so. In practice, however, it may not be necessary (or indeed desirable) to get bounds of this kind—real-world data are typically generated by processes of bounded computational power, memory, etc., and it would be use...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i9.26352